1.
PLoS Comput Biol ; 20(2): e1011849, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38315733

ABSTRACT

Sleep deprivation has an ever-increasing impact on individuals and societies. Yet, to date, there is no quick and objective test for sleep deprivation. Here, we used automated acoustic analyses of the voice to detect sleep deprivation. Building on current machine-learning approaches, we focused on interpretability by introducing two novel ideas: the use of a fully generic auditory representation as input feature space, combined with an interpretation technique based on reverse correlation. The auditory representation consisted of a spectro-temporal modulation analysis derived from neurophysiology. The interpretation method aimed to reveal the regions of the auditory representation that supported the classifiers' decisions. Results showed that generic auditory features could be used to detect sleep deprivation successfully, with an accuracy comparable to state-of-the-art speech features. Furthermore, the interpretation revealed two distinct effects of sleep deprivation on the voice: changes in slow temporal modulations related to prosody and changes in spectral features related to voice quality. Importantly, the relative balance of the two effects varied widely across individuals, even though the amount of sleep deprivation was controlled, thus confirming the need to characterize sleep deprivation at the individual level. Moreover, while the prosody factor correlated with subjective sleepiness reports, the voice quality factor did not, consistent with the presence of both explicit and implicit consequences of sleep deprivation. Overall, the findings show that individual effects of sleep deprivation may be observed in vocal biomarkers. Future investigations correlating such markers with objective physiological measures of sleep deprivation could enable "sleep stethoscopes" for the cost-effective diagnosis of the individual effects of sleep deprivation.
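A minimal sketch of the pipeline described above, assuming a 2D Fourier transform of a log spectrogram as a crude stand-in for the neurophysiologically derived spectro-temporal modulation representation; the synthetic data, the logistic-regression classifier, and all parameter values are illustrative, not the authors' implementation:

```python
# Sketch: generic spectro-temporal modulation features + linear classifier.
# The 2D FFT of a log spectrogram stands in for the neurophysiological
# modulation analysis; data and labels below are synthetic placeholders.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression

def modulation_features(audio, fs, n_keep=16):
    """Low-rate spectro-temporal modulation representation of one recording."""
    f, t, S = spectrogram(audio, fs, nperseg=512, noverlap=256)
    log_S = np.log(S + 1e-10)                   # compressive nonlinearity
    mps = np.fft.fftshift(np.abs(np.fft.fft2(log_S)))
    cy, cx = mps.shape[0] // 2, mps.shape[1] // 2
    # keep only the low spectral/temporal modulation rates around the origin
    return mps[cy - n_keep:cy + n_keep, cx - n_keep:cx + n_keep].ravel()

rng = np.random.default_rng(0)
# placeholders for voice recordings: 1 = sleep deprived, 0 = well rested
X = np.array([modulation_features(rng.standard_normal(16000), 16000)
              for _ in range(20)])
y = rng.integers(0, 2, size=20)
clf = LogisticRegression(max_iter=1000).fit(X, y)
```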


Subject(s)
Sleep Deprivation , Voice , Humans , Sleep , Voice Quality , Wakefulness
2.
IEEE Trans Haptics ; PP, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38416623

ABSTRACT

Noisy vibrotactile signals transmitted during tactile exploration of an object provide valuable information about the nature of its surface. Understanding the link between signal properties and how they are interpreted by the tactile sensory system remains challenging. In this paper, we investigated human perception of broadband, stationary vibrations recorded during the exploration of textures and reproduced using a vibrotactile actuator. Since intensity is a well-established perceptual attribute, we focused here on the relevance of the spectral content. The stimuli were first equalized in perceived intensity and then used to identify the most salient spectral features through dissimilarity estimations between pairs of successive vibrations. Based on dimensionally reduced spectral representations, models of dissimilarity ratings showed that the balance between low and high frequencies was the most important cue. This result was formally validated with a MUSHRA experiment, in which participants assessed the fidelity of resynthesized vibrations with variously distorted frequency balances. These findings offer valuable insights into human vibrotactile perception and establish a computational framework for analyzing vibrations as humans do. Moreover, they pave the way for signal synthesis and compression based on sparse representations, which holds significance for applications involving complex vibratory feedback.

3.
Commun Biol ; 6(1): 671, 2023 06 24.
Article in English | MEDLINE | ID: mdl-37355702

ABSTRACT

The human auditory system is designed to capture and encode sounds from our surroundings and conspecifics. However, the precise mechanisms by which it adaptively extracts the most important spectro-temporal information from sounds are still not fully understood. Previous auditory models have explained sound encoding at the cochlear level using static filter banks, but this view is incompatible with the nonlinear and adaptive properties of the auditory system. Here we propose an approach, inspired by cochlear physiology, that treats cochlear processing as a cascade of envelope interpolations. It unifies linear and nonlinear adaptive behaviors into a single comprehensive framework that provides a data-driven understanding of auditory coding. It allows simulating a broad range of psychophysical phenomena, from virtual pitches and combination tones to the consonance and dissonance of harmonic sounds. It further predicts properties of the cochlear filters, such as frequency selectivity. We also propose a possible link between the parameters of the model and the density of hair cells on the basilar membrane. Cascaded Envelope Interpolation may lead to improvements in sound processing for hearing aids by providing a nonlinear, data-driven way to preprocess acoustic signals that is consistent with peripheral processes.
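The abstract does not spell out the model, so the following is only a guess at the core idea it names, with every implementation choice (peak picking, cubic-spline interpolation, number of stages) an assumption: each stage computes an envelope by interpolating between the local maxima of its input, and stages are cascaded to extract progressively slower envelopes.

```python
# Guessed illustration of envelope-by-interpolation, cascaded: each stage
# interpolates the local maxima of its input, an adaptive, signal-dependent
# operation (unlike a static filter bank). Not the authors' implementation.
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def envelope_interpolation(x, t):
    """One stage: cubic-spline interpolation through the local maxima of x."""
    peaks, _ = find_peaks(x)
    if len(peaks) < 4:                  # too few peaks left to interpolate
        return np.full_like(x, x.max())
    return CubicSpline(t[peaks], x[peaks])(t)

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1100 * t)  # beating pair

env = np.abs(x)
for _ in range(3):                        # cascade the interpolation stages
    env = envelope_interpolation(env, t)  # each pass yields a slower envelope
```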


Subject(s)
Cochlea , Hearing , Humans , Cochlea/physiology , Sound , Basilar Membrane
4.
Cognition ; 238: 105478, 2023 09.
Article in English | MEDLINE | ID: mdl-37196381

ABSTRACT

Within certain categories of geometric shapes, prototypical exemplars that best characterize the category have been identified. These geometric prototypes are classically identified through visual and haptic perception or motor production, and are usually characterized by their spatial dimensions. However, whether prototypes can be recalled through the auditory channel has not been formally investigated. Here we address this question using auditory cues derived from timbre-modulated friction sounds evoking human elliptical drawing movements. Since non-spatial auditory cues have previously proved useful for discriminating distinct geometric shapes such as circles and ellipses, we hypothesized that sound dynamics alone can evoke shapes such as a prototypical ellipse. Four experiments were conducted and together revealed that a common elliptic prototype emerges from the auditory, visual, and motor modalities. This finding supports the hypothesis of a common coding of geometric shapes according to biological rules, with a prominent role for sensory-motor contingencies in the emergence of such prototypical geometry.


Subject(s)
Hearing , Movement , Humans , Movement/physiology , Auditory Perception/physiology , Cues
5.
J Acoust Soc Am ; 153(2): 797, 2023 02.
Article in English | MEDLINE | ID: mdl-36859162

ABSTRACT

Timbre provides an important cue for identifying musical instruments. Many timbral attributes covary with other parameters, such as pitch. This study explores listeners' ability to construct categories of instrumental sound sources from sounds that vary in pitch. Nonmusicians identified 11 instruments from the woodwind, brass, percussion, and plucked and bowed string families. In experiment 1, they were trained to identify instruments playing at a pitch of C4; in experiments 2 and 3, they were trained with a five-tone sequence (F#3-F#4), exposing them to the way timbre varies with pitch. Participants were required to reach a threshold of 75% correct identification during training. In the testing phase, successful listeners heard single tones (experiments 1 and 2) or three-tone sequences (A3-D#4; experiment 3) across each instrument's full pitch range, to test their ability to generalize identification from the learned sound(s). Identification generalization over pitch varied considerably across instruments. No significant differences were found between single-pitch and multi-pitch training or testing conditions. Identification rates could be predicted moderately well from spectrograms or modulation spectra. These results suggest that listeners use the most relevant acoustical invariance to identify musical instrument sounds, along with previous experience of the tested instruments.


Subject(s)
Cues , Learning , Humans , Generalization, Psychological , Acoustics
6.
JASA Express Lett ; 2(12): 123201, 2022 12.
Article in English | MEDLINE | ID: mdl-36586960

ABSTRACT

Auditory roughness resulting from fast temporal beating is often studied by summing two pure tones with close frequencies. Interestingly, a tactile counterpart of auditory roughness can be produced through touch with vibrotactile actuators. However, whether auditory roughness can also be perceived through touch, and whether it exhibits similar characteristics, remained unclear. Here, auditory roughness perception and its tactile counterpart were evaluated using pairs of pure-tone stimuli. Results revealed similar roughness curves in both modalities, suggesting similar sensory processing. This study attests to the relevance of such a paradigm for investigating auditory and tactile roughness in a multisensory fashion.
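The two-tone paradigm mentioned here takes a few lines to reproduce; the frequencies below are illustrative, chosen so that the beat rate falls in the range where roughness is typically strongest:

```python
# Two close pure tones produce beating at the difference frequency (70 Hz
# here, in the range where roughness is typically strongest). The same
# waveform can drive a loudspeaker (auditory condition) or a vibrotactile
# actuator (tactile condition). All values are illustrative.
import numpy as np

fs = 44100                       # sample rate (Hz)
f1, f2 = 1000.0, 1070.0          # carrier frequencies; beat rate = f2 - f1
t = np.arange(int(fs * 1.0)) / fs
stimulus = 0.5 * (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))
```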


Subject(s)
Touch Perception , Acoustic Stimulation/methods , Auditory Perception , Touch
7.
Front Neurosci ; 16: 1075288, 2022.
Article in English | MEDLINE | ID: mdl-36685244

ABSTRACT

The Temporal Voice Areas (TVAs) respond more strongly to speech sounds than to non-speech vocal sounds, but does this make them Temporal "Speech" Areas? We provide a perspective on this issue by combining univariate, multivariate, and representational similarity analyses of fMRI activations to a balanced set of speech and non-speech vocal sounds. We find that while speech sounds activate the TVAs more than non-speech vocal sounds, which is likely related to their larger temporal modulations in syllabic rate, they do not appear to activate additional areas nor are they segregated from the non-speech vocal sounds when their higher activation is controlled. It seems safe, then, to continue calling these regions the Temporal Voice Areas.

8.
J Neurosci Methods ; 362: 109297, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34320410

ABSTRACT

BACKGROUND: Many scientific fields now use machine-learning tools to assist with complex classification tasks. In neuroscience, automatic classifiers may be useful to diagnose medical images, monitor electrophysiological signals, or decode perceptual and cognitive states from neural signals. However, such tools often remain black boxes: they lack interpretability. This lack of interpretability has obvious ethical implications for clinical applications, and it also limits the usefulness of these tools for formulating new theoretical hypotheses. NEW METHOD: We propose a simple and versatile method to help characterize the information used by a classifier to perform its task. Specifically, noisy versions of training samples or, when the training set is unavailable, custom-generated noisy samples are fed to the classifier. Multiplicative noise, so-called "bubbles", or additive noise is applied to the input representation. Reverse-correlation techniques are then adapted to extract either the discriminative information, defined as the parts of the input dataset that carry the most weight in the classification decision, or the represented information, which corresponds to the input features most representative of each category. RESULTS: The method is illustrated for the classification of written numbers by a convolutional deep neural network, for the classification of speech versus music by a support vector machine, and for the classification of sleep stages from neurophysiological recordings by a random forest classifier. In all cases, the extracted features are readily interpretable. COMPARISON WITH EXISTING METHODS: Quantitative comparisons show that the present method can match state-of-the-art interpretation methods for convolutional neural networks. Moreover, our method uses reverse correlation, an intuitive and well-established framework in neuroscience. It is also generic: it can be applied to any kind of classifier and any kind of input data. CONCLUSIONS: We suggest that the method could provide an intuitive and versatile interface between neuroscientists and machine-learning tools.
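A minimal sketch of the multiplicative-noise variant described above, assuming a 2D (e.g., image or spectrogram) input; `classifier` stands for any function that maps an input array to a label, and the bubble parameters are illustrative:

```python
# Sketch of the "bubbles" variant: probe a trained classifier with randomly
# masked inputs, then reverse-correlate the masks with its decisions to map
# the discriminative regions. `classifier` is any array -> label function.
import numpy as np
from scipy.ndimage import gaussian_filter

def bubble_mask(shape, n_bubbles=10, sigma=3.0, rng=None):
    """Multiplicative mask: a few Gaussian apertures on a zero background."""
    rng = rng if rng is not None else np.random.default_rng()
    mask = np.zeros(shape)
    mask[rng.integers(0, shape[0], n_bubbles),
         rng.integers(0, shape[1], n_bubbles)] = 1.0
    mask = gaussian_filter(mask, sigma)
    return mask / mask.max()

def discriminative_map(classifier, sample, target, n_trials=2000, seed=0):
    """Mean mask on trials still classified as `target`, minus the rest."""
    rng = np.random.default_rng(seed)
    hit = np.zeros(sample.shape); n_hit = 0
    miss = np.zeros(sample.shape); n_miss = 0
    for _ in range(n_trials):
        m = bubble_mask(sample.shape, rng=rng)
        if classifier(sample * m) == target:
            hit += m; n_hit += 1
        else:
            miss += m; n_miss += 1
    return hit / max(n_hit, 1) - miss / max(n_miss, 1)
```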


Subject(s)
Machine Learning , Neural Networks, Computer , Support Vector Machine
9.
Nat Hum Behav ; 5(3): 369-377, 2021 03.
Article in English | MEDLINE | ID: mdl-33257878

ABSTRACT

Humans excel at using sounds to make judgements about their immediate environment. In particular, timbre is an auditory attribute that conveys crucial information about the identity of a sound source, especially for music. While timbre has primarily been considered to occupy a multidimensional space, unravelling the acoustic correlates of timbre remains a challenge. Here we re-analyse 17 datasets from studies published between 1977 and 2016 and observe that the original results are only partially replicable. We use a data-driven computational account to reveal the acoustic correlates of timbre. Human dissimilarity ratings are simulated with metrics learned on acoustic spectrotemporal modulation models inspired by cortical processing. We observe that timbre has both generic and experiment-specific acoustic correlates. These findings provide a broad overview of former studies on musical timbre and identify its relevant acoustic substrates according to biologically inspired models.
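A toy version of the metric-learning step described above: fit positive weights on precomputed modulation features so that weighted distances approximate human dissimilarity ratings. The diagonal-metric parametrization and least-squares loss are simplifying assumptions, not the paper's exact model:

```python
# Toy metric learning: positive per-dimension weights on modulation features,
# fitted so weighted distances match dissimilarity ratings. Diagonal metric
# and least-squares loss are simplifying assumptions; data are random.
import numpy as np
from scipy.optimize import minimize

def fit_metric(features, pairs, ratings):
    """features: (n_sounds, n_dims); pairs: (i, j) tuples; ratings: (n_pairs,)."""
    def loss(log_w):
        w = np.exp(log_w)                       # exp keeps weights positive
        d = np.array([np.sqrt(np.sum(w * (features[i] - features[j]) ** 2))
                      for i, j in pairs])
        return np.sum((d - ratings) ** 2)
    res = minimize(loss, np.zeros(features.shape[1]), method="L-BFGS-B")
    return np.exp(res.x)                        # learned weight per dimension

rng = np.random.default_rng(1)
F = rng.standard_normal((10, 5))                # 10 sounds, 5 modulation dims
pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]
ratings = rng.uniform(0, 1, len(pairs))         # stand-in for human ratings
weights = fit_metric(F, pairs, ratings)
```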


Subject(s)
Auditory Perception/physiology , Models, Biological , Music , Acoustics , Adult , Datasets as Topic , Humans
10.
J Acoust Soc Am ; 147(5): 3260, 2020 05.
Article in English | MEDLINE | ID: mdl-32486802

ABSTRACT

Natural soundscapes correspond to the acoustical patterns produced by biological and geophysical sound sources at different spatial and temporal scales in a given habitat. This pilot study aims to characterize the temporal-modulation information available to humans when perceiving variations in soundscapes within and across natural habitats. This is addressed by processing soundscapes from a previous study [Krause, Gage, and Joo (2011). Landscape Ecol. 26, 1247] with models of human auditory processing that extract modulation at the output of cochlear filters. The soundscapes represent combinations of elevation, animal, and vegetation diversity in four habitats of the biosphere reserve in Sequoia National Park (Sierra Nevada, USA). Bayesian statistical analysis and support-vector-machine classifiers indicate that: (i) amplitude-modulation (AM) and frequency-modulation (FM) spectra distinguish the soundscapes associated with each habitat; and (ii) for each habitat, diurnal and seasonal variations are associated with salient changes in AM and FM cues at rates between about 1 and 100 Hz in the low (<0.5 kHz) and high (>1-3 kHz) audio-frequency ranges. Support-vector-machine classifications further indicate that soundscape variations can be classified accurately from these perceptually inspired representations.
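A simplified sketch of such a pipeline: AM spectra extracted at the output of a band-pass filter bank (a crude stand-in for cochlear filtering), feeding a support vector machine. FM cues and the Bayesian analysis are omitted, and all band edges and data are placeholders:

```python
# Simplified habitat classification: AM spectra at the output of a band-pass
# filter bank (a crude cochlear stand-in), fed to an SVM. FM cues and the
# Bayesian analysis are omitted; bands, durations, and data are placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.svm import SVC

def am_spectrum(audio, fs, bands=((100, 500), (500, 2000), (2000, 7000))):
    feats = []
    for lo, hi in bands:
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, audio)))    # AM envelope
        spec = np.abs(np.fft.rfft(env - env.mean()))
        freqs = np.fft.rfftfreq(len(env), 1 / fs)
        feats.append(spec[(freqs >= 1) & (freqs <= 100)]) # 1-100 Hz AM rates
    return np.concatenate(feats)

rng = np.random.default_rng(2)
fs = 16000
X = np.array([am_spectrum(rng.standard_normal(fs), fs) for _ in range(12)])
y = rng.integers(0, 4, size=12)                           # four habitats
clf = SVC(kernel="linear").fit(X, y)
```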


Subject(s)
Cues , Sound , Animals , Bayes Theorem , Ecosystem , Humans , Pilot Projects
11.
Front Psychol ; 8: 587, 2017.
Article in English | MEDLINE | ID: mdl-28450846

ABSTRACT

The ability of a listener to recognize sound sources, and in particular musical instruments from the sounds they produce, raises the question of which acoustical information is used to achieve such a task. It is now well established that the shapes of the temporal and spectral envelopes are crucial to the recognition of a musical instrument. More recently, Modulation Power Spectra (MPS) have been shown to be a representation that potentially explains the perception of musical instrument sounds. Nevertheless, the question of which specific regions of this representation characterize a musical instrument remains open. An identification task was applied to two subsets of musical instruments: tuba, trombone, cello, saxophone, and clarinet on the one hand, and marimba, vibraphone, guitar, harp, and viola pizzicato on the other. The sounds were processed by filtering their spectrotemporal modulations with 2D Gaussian windows. The regions of this representation most relevant for identification were determined for each instrument, revealing the regions essential to their identification. The method used here is based on a "molecular approach", the so-called bubbles method. Overall, the instruments were correctly identified, and the lower values of spectrotemporal modulations proved to be the most important regions of the MPS for recognizing instruments. Interestingly, instruments that were confused with each other led to non-overlapping regions and were confused when they were filtered in the most salient region of the other instrument. These results suggest that musical instrument timbres are characterized by specific spectrotemporal modulations, information which could contribute to music information retrieval tasks such as automatic source recognition.
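A rough sketch of the stimulus manipulation named above: compute an MPS as the 2D FFT of a log spectrogram, retain the region under a 2D Gaussian window, and invert. Phase handling and spectrogram inversion are heavily simplified relative to a full resynthesis pipeline, and all parameters are illustrative:

```python
# Sketch of MPS filtering: 2D FFT of a log spectrogram, 2D Gaussian window
# over the (spectral, temporal) modulation plane, inverse FFT. A full
# pipeline would also invert the spectrogram back to a waveform.
import numpy as np
from scipy.signal import spectrogram

def gaussian_window_2d(shape, center, sigma):
    ys, xs = np.indices(shape)
    return np.exp(-(((ys - center[0]) / sigma[0]) ** 2
                    + ((xs - center[1]) / sigma[1]) ** 2) / 2)

fs = 16000
rng = np.random.default_rng(3)
audio = rng.standard_normal(fs)                  # placeholder instrument tone
f, t, S = spectrogram(audio, fs, nperseg=256, noverlap=128)
mps = np.fft.fftshift(np.fft.fft2(np.log(S + 1e-10)))

# Gaussian aperture centered on the origin keeps low modulation rates only
win = gaussian_window_2d(mps.shape, (mps.shape[0] // 2, mps.shape[1] // 2),
                         sigma=(5, 5))
filtered_log_spec = np.fft.ifft2(np.fft.ifftshift(mps * win)).real
```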

12.
PLoS One ; 11(4): e0154475, 2016.
Article in English | MEDLINE | ID: mdl-27119411

ABSTRACT

The perception and production of biological movements are characterized by the 1/3 power law, a relation linking the curvature and the velocity of an intended action. In particular, motions are perceived and reproduced in distorted form when their kinematics deviate from this biological law. Whereas most studies of this perceptual-motor relation have focused on the visual or kinaesthetic modalities in a unimodal context, here we show that auditory dynamics strikingly bias visuomotor processes. Biologically consistent or inconsistent circular visual motions were combined with circular or elliptical auditory motions. The auditory motions were synthesized friction sounds mimicking those produced by the friction of a pen on paper when someone is drawing. Sounds were presented diotically, and the auditory motion velocity was conveyed through timbre variations of the friction sound, without any spatial cues. Remarkably, when subjects were asked to reproduce circular visual motion while listening to sounds that evoked elliptical kinematics, without seeing their hand, they drew elliptical shapes. Moreover, the distortions induced by inconsistent elliptical kinematics in the visual and auditory modalities added up linearly. These results bring to light the substantial role of auditory dynamics in visuo-motor coupling in a multisensory context.
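For reference, the 1/3 power law relates tangential velocity v to the radius of curvature R of the trajectory as v = K·R^(1/3). The sketch below generates an elliptical trajectory whose timing obeys the law; the gain K and the ellipse axes are arbitrary:

```python
# The 1/3 power law: tangential velocity v = K * R**(1/3), with R the radius
# of curvature. Generate an ellipse whose traversal timing obeys the law.
# K and the ellipse axes are arbitrary.
import numpy as np

a, b, K = 2.0, 1.0, 1.0
phi = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
x, y = a * np.cos(phi), b * np.sin(phi)

# radius of curvature of an ellipse at parameter phi
R = (a**2 * np.sin(phi)**2 + b**2 * np.cos(phi)**2) ** 1.5 / (a * b)
v = K * R ** (1.0 / 3.0)                 # law-abiding velocity profile

# arc length per sample / velocity = time per sample -> timestamps
ds = np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0]))
timestamps = np.concatenate(([0.0], np.cumsum(ds / v)[:-1]))
```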


Subject(s)
Hand/physiology , Ocular Physiological Phenomena , Photic Stimulation/methods , Adult , Auditory Perception/physiology , Biomechanical Phenomena , Female , Humans , Male , Motion Perception/physiology , Psychomotor Performance , Visual Perception/physiology , Young Adult
13.
Neurosci Lett ; 612: 225-230, 2016 Jan 26.
Article in English | MEDLINE | ID: mdl-26708633

ABSTRACT

Many studies have stressed that not only human movement execution but also the perception of motion is constrained by specific kinematics. For instance, it has been shown that visuo-manual tracking of a spotlight is optimal when the spotlight's motion complies with biological rules such as the so-called 1/3 power law, which establishes the covariation between the velocity and the trajectory curvature of a movement. The visual or kinesthetic perception of a geometry induced by motion has also been shown to be constrained by such biological rules. In the present study, we investigated whether the geometry induced by the visuo-motor coupling of biological movements is also constrained by the 1/3 power law under visual open-loop control, i.e., without visual feedback of the arm displacement. We showed that when someone is asked to synchronize a drawing movement with a visual spotlight following a circular shape, the geometry of the reproduced shape is biased by visual kinematics that do not respect the 1/3 power law. In particular, elliptical shapes were reproduced when the circle was traced with kinematics corresponding to an ellipse. Moreover, the distortions observed here were larger than in perceptual tasks, stressing the role of motor attractors in such visuo-motor coupling. Finally, by investigating the direct influence of visual kinematics on motor reproduction, our results reconcile previous knowledge on the sensorimotor coupling of biological motions with external stimuli and provide evidence for the amodal encoding of biological motion.


Subject(s)
Hand/physiology , Motion Perception , Movement , Psychomotor Performance , Visual Perception , Adult , Female , Humans , Illusions , Male , Photic Stimulation , Young Adult
14.
J Acoust Soc Am ; 140(6): EL478, 2016 12.
Article in English | MEDLINE | ID: mdl-28039992

ABSTRACT

Modulation Power Spectra include dimensions of spectral and temporal modulation that contribute significantly to the perception of musical instrument timbres. Nevertheless, it remains unknown whether each instrument's identity is characterized by specific regions in this representation. A recognition task was applied to tuba, trombone, cello, saxophone, and clarinet sounds resynthesized with filtered spectrotemporal modulations. The most relevant parts of this representation for instrument identification were determined for each instrument. In addition, instruments that were confused with each other led to non-overlapping spectrotemporal modulation regions, suggesting that musical instrument timbres are characterized by specific spectrotemporal modulations.

15.
Hum Mov Sci ; 43: 216-28, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25533208

ABSTRACT

The present study investigated the effect of handwriting sonification on graphomotor learning. Thirty-two adults, distributed in two groups, learned four new characters with their non-dominant hand. The experimental design included a pre-test, a training session, and two post-tests, one just after the training session and another 24 h later. Two characters were learned with, and two without, real-time auditory feedback (FB). The first group learned the two non-sonified characters first and then the two sonified characters, whereas the reverse order was adopted for the second group. Results revealed that auditory FB improved the speed and fluency of handwriting movements but reduced, in the short term only, the spatial accuracy of the trace. Transforming kinematic variables into sounds allows writers to perceive their movement in addition to the written trace, and this might facilitate handwriting learning. However, there were no differential effects of auditory FB, either long-term or short-term, for the subjects who first learned the characters with auditory FB. We hypothesize that the positive effect on handwriting kinematics was transferred to the characters learned without FB. This transfer effect of the auditory FB is discussed in light of the Theory of Event Coding.


Subject(s)
Feedback, Sensory , Functional Laterality , Handwriting , Motor Skills , Adult , Female , Humans , Male , Transfer, Psychology , Young Adult
16.
J Exp Psychol Hum Percept Perform ; 40(3): 983-94, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24446717

ABSTRACT

This study investigates the human ability to perceive biological movements through the friction sounds produced by drawing and, furthermore, the ability to recover the drawn shapes from the friction sounds generated. In a first experiment, friction sounds synthesized in real time and modulated by the velocity profile of the drawing gesture revealed that subjects associated a biological movement with those sounds whose timbre variations were generated by velocity profiles following the 1/3 power law. This finding demonstrates that sounds can adequately convey information about human movements if their acoustic characteristics accord with the kinematic rule governing actual movements. Our ability to recognize drawn shapes was further investigated in two association tasks in which both recorded and synthesized sounds had to be associated with both distinct and similar visual shapes. Results revealed that, for both synthesized and recorded sounds, subjects made correct associations for distinct shapes, although some confusion was observed for similar shapes. Comparisons between recorded and synthesized sounds lead to the conclusion that the timbre variations induced by the velocity profile enabled shape recognition. The results are discussed in the context of the ecological and ideomotor frameworks.
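A toy version of the velocity-to-timbre mapping underlying such stimuli: noise passed through a time-varying low-pass filter whose cutoff follows a drawing-velocity profile. The one-pole filter and the particular mapping are illustrative choices, not the paper's synthesis engine:

```python
# Toy friction-sound synthesis: white noise through a time-varying one-pole
# low-pass filter whose cutoff follows a drawing-velocity profile, so timbre
# varies with gesture velocity. Mapping and ranges are illustrative.
import numpy as np

fs, dur = 16000, 2.0
n = int(fs * dur)
rng = np.random.default_rng(4)
noise = rng.standard_normal(n)

# velocity profile of a cyclic (elliptic-like) gesture, two cycles per second
velocity = 1.5 + np.cos(2 * np.pi * 2 * np.arange(n) / fs)
cutoff = 200 + 1500 * (velocity - velocity.min()) / np.ptp(velocity)

# one-pole low-pass: y[i] = y[i-1] + alpha[i] * (x[i] - y[i-1])
alpha = 1 - np.exp(-2 * np.pi * cutoff / fs)
sound = np.zeros(n)
for i in range(1, n):
    sound[i] = sound[i - 1] + alpha[i] * (noise[i] - sound[i - 1])
```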


Subject(s)
Art , Auditory Perception , Concept Formation , Discrimination, Psychological , Form Perception , Motion Perception , Pattern Recognition, Physiological , Psychomotor Performance , Adolescent , Adult , Female , Friction , Gestures , Humans , Male , Psychoacoustics , Sound Localization , Young Adult